Cortical Integration: Possible Solutions to the Binding and Linking Problems in Perception, Reasoning and Long Term Memory

Nick Bostrom


Page 2

Source: http://www.nickbostrom.com/old/cortical.html

3. Integration through convolution

3.1 Recursion and compression as an integration mechanism.
One approach to the integration problem is to use a recurrent network. This was originally proposed by Pollack (1990) as a way of representing compositional symbolic structures, such as lists and trees, in neural networks. The principle behind the Recursive Auto-Associative Memory (RAAM) is easily explained. Consider a sequence A, B, C, D, ... that is to be stored in the memory. The first step is to feed the representations of A and B to the input layer. The input is then forwarded by the network to the hidden layer, which is typically a bottleneck; in the simplest case it consists of half as many neurons as the input layer. Some compressed representation (AB) of A and B thus results in the hidden layer. Now (AB) is fed back to the input layer, where it is combined with C. A new representation, (ABC), results in the hidden layer, which in its turn is combined with D; and so forth.

Two things are needed if this is to work. First, we have to make sure that the compressed representations preserve the information contained in their constituents. Second, a mechanism must be trained that can unfold the original sequence from its compressed form. Both these things are taken care of simultaneously in the training procedure of the RAAM. To achieve this, the hidden layer is connected to a third layer, the output layer, and for each training pattern the network is trained by a backpropagation algorithm to reproduce the input pattern in the output layer. In this way, the first two layers come to act as an encoder, while the last two layers do the work of a decoder. What distinguishes the RAAM from a plain encoder/decoder network is that the training patterns are not determined in advance but depend on what patterns emerge in the hidden layer after each update. This is what makes possible the construction of "convoluted" representations that can be unfolded and developed stepwise through a circular process.
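To make the compress-and-unfold cycle concrete, here is a minimal toy sketch in Python with NumPy. The layer sizes, learning rate, epoch count and random binary "symbols" are illustrative assumptions, not Pollack's settings; the final hidden state plays the role of the compressed representation (ABCD).

    import numpy as np

    rng = np.random.default_rng(0)
    n = 16          # width of one item pattern and of the compressed code (assumed)
    lr = 0.1        # learning rate (illustrative)

    # Encoder weights (2n -> n bottleneck) and decoder weights (n -> 2n).
    W_enc = rng.normal(0.0, 0.3, (n, 2 * n))
    W_dec = rng.normal(0.0, 0.3, (2 * n, n))

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def encode(state, item):
        # Compress (state, item) into a fixed-size code in the hidden layer.
        return sigmoid(W_enc @ np.concatenate([state, item]))

    def decode(code):
        # Unfold a code back into (previous state, last item).
        out = sigmoid(W_dec @ code)
        return out[:n], out[n:]

    # Four random binary "symbols" standing in for A, B, C, D.
    items = [rng.uniform(0, 1, n).round() for _ in range(4)]

    for epoch in range(5000):
        state = np.zeros(n)
        for item in items:
            x = np.concatenate([state, item])      # current training pattern
            code = sigmoid(W_enc @ x)
            y = sigmoid(W_dec @ code)
            # One step of backprop on the reconstruction error ||y - x||^2.
            d_out = (y - x) * y * (1.0 - y)
            d_code = (W_dec.T @ d_out) * code * (1.0 - code)
            W_dec -= lr * np.outer(d_out, code)
            W_enc -= lr * np.outer(d_code, x)
            # The next training pattern depends on the hidden code just formed:
            state = encode(state, item)            # (AB), then (ABC), ...

    # Build (ABCD) and unfold it again, last item first; reconstruction is
    # approximate in this toy and degrades with nesting depth.
    code = np.zeros(n)
    for item in items:
        code = encode(code, item)
    for expected in reversed(items):
        code, recovered = decode(code)
        print(np.round(recovered).astype(int), expected.astype(int))

Note how each training pattern contains the hidden code produced by the previous step: this is the moving-target feature that distinguishes the RAAM from a plain encoder/decoder network.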

Advantages of the RAAM include: open-ended memory capacity (overloading the RAAM gradually increases the frequency of retrieval failures, but there is no sharp upper bound on the capacity); generalization ability (rather modest in Pollack's simulations); some degree of biological plausibility (to be considered later); and a standardized form of representation (the compressed patterns that appear in the hidden layer can have very different structures, but they are all of the same size, which may facilitate further processing).

An extension of the RAAM has been simulated by Reilly (1991), who prefixed a standard simple recurrent network (SRN) (Elman (1990)) to the RAAM. This enables the system to perform some simple on-line parsing of verbal input. The price paid is that learning becomes more difficult, and the adequate RAAM representations have to be known in advance, before the SRN can be trained. The performance was also limited in several ways (some of which could probably be avoided if the SRN were replaced by something more powerful, such as the auto-associative recurrent network (AARN) developed by Maskara et al. (1993)).
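The ordering constraint just mentioned can be made explicit with a small structural sketch, again in Python with NumPy. The dimensions and the srn_state helper are assumptions for illustration, not Reilly's actual setup; only the forward pass is shown.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 16                                 # same code width as in the RAAM sketch

    W_in = rng.normal(0.0, 0.3, (n, n))    # input -> hidden
    W_rec = rng.normal(0.0, 0.3, (n, n))   # hidden(t-1) -> hidden(t): the Elman context loop

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def srn_state(sequence):
        # Run the SRN over a sequence of item vectors; return the final hidden state.
        h = np.zeros(n)
        for x in sequence:
            h = sigmoid(W_in @ x + W_rec @ h)
        return h

    sequence = [rng.uniform(0, 1, n).round() for _ in range(4)]
    final_state = srn_state(sequence)

    # Training (omitted) would nudge final_state toward the compressed code that
    # a fully trained RAAM assigns to this sequence. That target cannot be
    # computed until the RAAM itself has been trained -- hence the ordering
    # constraint noted in the text.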


3.2 The biological plausibility of binding by recursion.

The RAAM clearly has some attractive properties, but how could it be implemented in the brain?

There are plenty of feedback loops on different levels in the nervous system. Within the cortex there are recurrent connections between the different layers and between different areas. On a larger scale there is, for instance, the loop from the cortex down to the basal ganglia and back again. This latter alternative has the consequence that the compressed representations would appear down in the basal ganglia rather than in the neocortex, unless they were somehow transferred back up to the relevant cortical areas. From lesion studies (e.g. Gainotti et al. (1996)) it appears that the main basis of conceptual representations is in the cortex, so the hypothesis that the basal ganglia serve as the bottleneck in a RAAM seems implausible. The same observation seems to rule out the hippocampus as the site of the RAAM.

Thus it seems more likely that the RAAM would be instantiated within the cortex. It would be interesting to have a biologically realistic simulation study of the potential of the cortex to harbor a RAAM, whether between different layers or between cortical areas, vertically or horizontally. One possibility is that there is not one big RAAM, but rather multiple small-scale RAAM-like structures, some of which might have their compressed representations forwarded as partial inputs to other RAAM structures. In this way an architecture might be built up that could be operated upon at different levels, depending on the degree of detail required for the task at hand; synopsis would be combined with richness of content. So far, these ideas have not been elaborated.
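Since the proposal is unelaborated, the following is no more than a structural sketch under assumed dimensions, with untrained random encoders (make_raam_encoder and compose are hypothetical helpers). It shows only how the fixed-size codes of small RAAMs could serve as input items to a higher-level RAAM, exploiting the standardized representation size noted earlier.

    import numpy as np

    rng = np.random.default_rng(2)
    n = 16                                 # width of every code and item (assumed)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def make_raam_encoder():
        # An untrained 2n -> n compressor standing in for one small RAAM's encoder.
        W = rng.normal(0.0, 0.3, (n, 2 * n))
        return lambda state, item: sigmoid(W @ np.concatenate([state, item]))

    enc_a, enc_b, enc_top = (make_raam_encoder() for _ in range(3))

    def compose(encoder, items):
        # Fold a sequence of items into one fixed-size compressed code.
        state = np.zeros(n)
        for item in items:
            state = encoder(state, item)
        return state

    # Two small RAAMs each compress their own detail-level sequence...
    part_a = compose(enc_a, [rng.uniform(0, 1, n).round() for _ in range(3)])
    part_b = compose(enc_b, [rng.uniform(0, 1, n).round() for _ in range(3)])

    # ...and because every code has the same size n, each code can be forwarded
    # as an ordinary input item to a higher-level RAAM, yielding a coarse
    # synopsis that could in principle be unfolded again when detail is needed.
    synopsis = compose(enc_top, [part_a, part_b])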

One difficulty with RAAM-like systems is that it is not obvious how learning would proceed if they were biologically implemented. In the simulations they are trained by backpropagation, and the same training patterns are cycled through innumerable times while the weights slowly adapt. But the cortex does not use backpropagation in any ordinary sense, and some types of learning require only a single presentation. Until this crucial issue about learning is clarified, there is no RAAM theory of cortical integration. One interesting approach, which combines ideas from the RAAM with attractor network theory and could possibly overcome these obstacles, will be discussed in section 5.
